# Low-resource efficient inference
## Samastam It V1
Samastam is an early instruction-tuned variant of the Sarvam-1 model, fine-tuned on the Alpaca-cleaned dataset to support multilingual instruction responses.
Tags: Large Language Model, Transformers
Author: hathibelagal · Downloads: 188 · Likes: 1
## Falcon3 Mamba R1 V0
An inference-optimized model fine-tuned from Falcon3-Mamba-7B-Instruct, using the Mamba architecture for efficient inference.
Tags: Large Language Model, Transformers
Author: hanzla · Downloads: 1,690 · Likes: 9
## Llama 3.1 8B Instuct Uz GGUF
A statically quantized GGUF version of behbudiy/Llama-3.1-8B-Instuct-Uz that supports Uzbek and English and suits a variety of text-generation tasks.
Tags: Large Language Model, Supports Multiple Languages
Author: mradermacher · Downloads: 241 · Likes: 1
## Turkish Llama 8b DPO V0.1
CosmosLLaMa-Instruction DPO is a large language model designed for Turkish text generation; it continues text fragments coherently and in a contextually appropriate way.
Tags: Large Language Model, Other
Author: ytu-ce-cosmos · Downloads: 5,182 · Likes: 47
## Deepseek Coder 1.3b Typescript
A TypeScript-specific code-generation model fine-tuned from deepseek-coder-1.3b-base, supporting a 16K context window and Fill-in-the-Middle (FIM) completion.
License: Other
Tags: Large Language Model, Transformers
Author: CodeGPTPlus · Downloads: 72 · Likes: 19
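Fill-in-the-Middle prompting wraps a code prefix and suffix in sentinel tokens and asks the model to generate the missing span between them. A minimal sketch of how such a prompt is assembled; the sentinel spellings below follow common DeepSeek-Coder conventions and are an assumption, so verify them against the model's tokenizer configuration before use:

```python
# Sketch: building a Fill-in-the-Middle (FIM) prompt for a
# DeepSeek-Coder-style model. The sentinel token spellings are an
# assumption; check the model's tokenizer config for the exact strings.
FIM_BEGIN = "<｜fim▁begin｜>"
FIM_HOLE = "<｜fim▁hole｜>"
FIM_END = "<｜fim▁end｜>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Place the prefix and suffix around the hole the model should fill."""
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

# Hypothetical TypeScript fragment: the model is asked to fill in the body.
prompt = build_fim_prompt(
    "function add(a: number, b: number): number {\n    ",
    "\n}",
)
print(prompt)
```

The model's completion is then inserted at the hole position, which is what makes FIM useful for in-editor code completion rather than left-to-right generation only.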
## Llama2 7b Mmlu
Continually trained on the MMLU dataset from the Llama-2-7b-hf base model to improve MMLU scores while keeping other benchmark results stable.
License: Apache-2.0
Tags: Large Language Model, Transformers, English
Author: itsliupeng · Downloads: 120 · Likes: 7
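MMLU is a multiple-choice benchmark, so "improving MMLU metrics" means raising plain accuracy: the fraction of questions where the model's chosen option letter matches the gold answer. A minimal sketch of that scoring, using hypothetical data:

```python
# Minimal sketch of MMLU-style scoring: each item is a multiple-choice
# question, and the metric is accuracy over predicted option letters.
# The gold answers and predictions below are hypothetical.
def mmlu_accuracy(predictions, answers):
    """Fraction of items where the predicted letter matches the gold letter."""
    if not answers:
        raise ValueError("no items to score")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

gold = ["A", "C", "B", "D", "A"]
preds = ["A", "C", "D", "D", "A"]
print(mmlu_accuracy(preds, gold))  # 0.8
```

Full MMLU reporting averages this accuracy over its 57 subject categories, but the per-item comparison is exactly this letter match.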